
Search results for: "NewsGuard"


15 mentions found


Sports Illustrated published articles by fake authors with AI-generated profile pictures, Futurism reported. Futurism identified two Sports Illustrated writers, "Drew Ortiz" and "Sora Tanaka," whose biographies appeared to be fake. In a statement to Futurism, Sports Illustrated owner Arena Group denied publishing AI-generated articles but said it was taking down the pieces while an internal investigation took place.
Democrats accuse X of profiting from Hamas propaganda
2023-11-21 | By Brian Fung | edition.cnn.com | time to read: +4 min
Washington CNN — A group of House Democrats has accused X, the platform formerly known as Twitter, of profiting from Hamas propaganda and misinformation about the Israel-Hamas war after reports by independent researchers found numerous accounts glorifying the US-sanctioned terror group. More than two dozen US lawmakers signed the letter, dated Tuesday, addressed to X owner Elon Musk and CEO Linda Yaccarino. Musk sued Media Matters on Monday, accusing it of misrepresenting the likelihood that ads would be shown alongside extremist material. But some legal critics have cast doubt on the complaint, calling it “weak” and “bogus” in the face of the First Amendment. The letter calls for Yaccarino and Musk to respond by December 1 to allegations that X has amplified terrorist propaganda in violation of its own policies.
The Consequences of Elon Musk’s Ownership of X
Now rebranded as X, the site has experienced a surge in racist, antisemitic and other hateful speech. Research conducted in part by the Institute for Strategic Dialogue concluded that antisemitic tweets in English more than doubled after Mr. Musk’s takeover. Keeping X at the center of public debate is exactly Mr. Musk’s goal, which he describes at times with a messianic zeal. Even worse, the article argued, Mr. Musk’s changes appear to be boosting the engagements of the most contentious users. A month into Mr. Musk’s ownership, the platform stopped enforcing its policy against Covid-19 misinformation.
NewsGuard identified seven accounts it describes as “misinformation superspreaders,” which have shared widely debunked claims about the conflict. Verified users are also eligible to receive payments from the platform, financially incentivizing posts from the users who are actively spreading misinformation. The group said 186 of the 250 posts were sent by premium X accounts. Soon after NewsGuard published its report Thursday, Musk posted on X that the company “should be disbanded immediately.” CNN has reached out to X and Musk for further comment. In May, there was a brief dip in the stock market after verified accounts on X shared fake images of a purported “explosion” near the Pentagon.
REGULATORY SCRUTINY
While disinformation has spread on all major social media platforms including Facebook and TikTok, X appeared to be the most recent to draw scrutiny from regulators. On Tuesday, European Union Commissioner Thierry Breton warned Musk that X was spreading "illegal content and disinformation," according to a letter Breton posted on X. Musk himself recommended that X users follow two accounts that had previously spread false claims for "real-time" updates on the conflict, the Washington Post reported. False information has also spread on messaging app Telegram and short-form video app TikTok, said DFRLab's Trad. Like other online platforms, YouTube has moderation employees and technology to remove content that violates its rules.
Workers using OpenAI's ChatGPT may actually perform more poorly than those who don't, new BCG research finds. If you're using ChatGPT as a shortcut at work, you may want to be extra careful. For tasks "inside the frontier," consultants using AI were "significantly more productive" and "produced significantly higher quality results" than those who weren't using the chatbot. BCG's findings offer a cautionary tale for workers thinking about using ChatGPT to help do their jobs. AI-generated errors may only get worse: in a recent paper, AI researchers found that generative AI models could soon be trained on AI-generated content — a phenomenon they call "model collapse."
By comparison, its closest mainstream cousin, the Google-owned video service YouTube, has billions of monthly logged-in users, a spokesperson said. In an emailed response, they said the leaderboard rankings are generated from the most liked recent videos on the site. However, YouTube, unlike Rumble, has said it will continue to remove content that tries to deceive voters in the 2024 elections. The RNC said it was taken down to direct users to the debate livestream and avoid confusing viewers with multiple videos. Other comments on his recent videos told Trump to “execute” Democrats and suggested that someone should “build a lot of gallows.” Rumble said it removed those comments in response to AP's inquiry.
Generative AI could soon be trained on AI-generated content — and experts are raising the alarm. Other AI researchers have coined their own terms to describe the training method. Jathan Sadowski, a senior fellow at the Emerging Technologies Research Lab in Australia who researches AI, called this phenomenon "Habsburg AI," arguing that AI systems heavily trained on outputs of other generative AI tools can create "inbred mutant" responses that contain "exaggerated, grotesque features." These new terms come as AI-generated content, often riddled with errors, has flooded the internet since OpenAI launched ChatGPT last November.
One of the latest is flooding social media with spam bots and AI-generated content that could further degrade the quality of information on the internet. Botnets are networks of hundreds of harmful bots and spam campaigns on social media that can go undetected by current anti-spam filters.
We can still detect AI-generated spam — for now
Both NewsGuard and the paper's researchers were separately able to unearth AI-generated spam content using an obvious tell that chatbots currently have. Researchers look for when these telltale responses slip out in an automated bot's content, whether on a webpage or in a tweet. One such measure was tagging AI-generated content with a hidden label to help people distinguish it from content made by humans, per the White House.
A report from Europol, the European Union's law-enforcement agency, expects a mind-blowing 90% of internet content to be AI-generated in a few years. And while AI bots have telltale signs now, experts indicate that they will soon get better at mimicking humans and evading the detection systems developed by Menczer and social networks. While misinformation has long been a problem with the internet, AI is going to blow our old problems out of the water. But security researchers have discovered that the AI bots in your apps and devices might steal sensitive information for hackers.
As more and more AI-generated content is published online, future AIs trained on this material will ultimately spiral into gibberish, machine learning experts have predicted. A group of British and Canadian scientists released a paper in May seeking to understand what happens after several generations of AIs are trained off each other. Improbable events are less and less likely to be reflected in its output, narrowing what the next AI — trained on that output — understands to be possible. One degenerate sample read: "In addition to being home to some of the world's largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-" Anderson likened it to massive pollution, writing: "Just as we've strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we're about to fill the Internet with blah."
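The narrowing described above — each model reproducing only what it saw in the previous model's output, so rare events gradually vanish — can be illustrated with a toy simulation. This sketch is not from the paper itself; it stands in for "training a model" with the simplest possible stand-in, fitting a Gaussian to the previous generation's samples and resampling from it:

```python
import random
import statistics

def next_generation(samples):
    # Fit a toy "model" (just a Gaussian) to the data it is trained on,
    # then generate the next generation's training data from that fit.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # generation 0: "human" data
spread = [statistics.stdev(data)]
for _ in range(5000):  # thousands of model-trained-on-model generations
    data = next_generation(data)
    spread.append(statistics.stdev(data))

# The measured spread collapses over the generations: tail (improbable)
# values stop appearing, so each successive model sees, and reproduces,
# a narrower world than the one before it.
print(f"spread at generation 0:    {spread[0]:.4f}")
print(f"spread at generation 5000: {spread[-1]:.6f}")
```

Each refit loses a little of the distribution's tails by sampling error, and those losses compound rather than cancel — the same one-way ratchet the researchers describe for models trained on model output.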
Russian disinformation sites with Google-powered ads have more than doubled since last year, according to NewsGuard. Last year, as Russia began its invasion of Ukraine, Google and major brands placed ads on sites that promoted Kremlin propaganda. Google added that it has stopped running ads on specific pages on sites shared by Insider that show "violating content." The problem of ads appearing on sites containing disinformation is growing, as these sites take advantage of programmatic ad buying, where digital ads are purchased in an auction using automation. Programmatic ad buying is both a complicated and technical process, so advertisers often don't see where their ads are going.
TikTok uncovered two networks of pro-Russian TikTokers last year, the company said in a report on Thursday. They used speech synthesis software to spread pro-Russian propaganda in various languages. The accounts amassed more than 133,000 followers before being identified and removed by TikTok between July and September last year. In its report, TikTok said that Russia's "war of aggression" in Ukraine has "challenged us to confront a complex and rapidly changing environment." In a blog post that accompanied the report, TikTok public policy director Caroline Greer said the platform was able to help find innovative solutions to these "long-standing industry challenges."
A new study by NewsGuard, a site that monitors misinformation across the internet, found that one in five search results on TikTok contains misinformation. The study also found that TikTok often yields more partisan search results than Google. Gen Z users have begun replacing Google with TikTok as their primary search engine, The New York Times recently reported.
TikTok, whose users are predominantly teenagers and young adults, “repeatedly delivered videos containing false claims in the first 20 results, often within the first five,” the report states. “Google, by comparison, provided higher-quality and less-polarizing results, with far less misinformation.” A Google spokesperson declined to comment on the report when contacted by CNN. For example, a search for the question “Was the 2020 election stolen?” yielded six videos that contained false claims in the first 20 results, NewsGuard found. In response to the NewsGuard report, a TikTok spokesperson told CNN that its community guidelines “make clear that we do not allow harmful misinformation, including medical misinformation, and we will remove it from the platform.” “If I had kids of TikTok age, I would certainly want to know what they’re using as a search engine,” Brill said.